We address the problem of multi-person 3D human pose and shape estimation from a single image. Although this problem can be tackled by applying a single-person approach to the same scene multiple times, recent works have shown the advantages of deep architectures that reason about all people in the scene jointly and in a holistic manner, e.g., by enforcing depth-ordering constraints or minimizing interpenetration between the reconstructed bodies. However, existing methods still fail to capture the variation in human scale caused by the inherent ambiguity between body size and depth. In this work, we address this challenge by enforcing that all people's feet remain on the ground, formulating a novel optimization scheme that learns the appropriate body scales and relative camera poses. Thorough evaluation on the MuPoTS-3D and 3DPW datasets shows that our approach robustly estimates the body translations and shapes of multiple people while recovering their spatial arrangement, consistently improving on the current state of the art, especially in scenes with people of very different heights.
translated by 谷歌翻译
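The paper's optimization scheme is not reproduced here, but the geometric intuition behind the feet-on-ground constraint can be sketched under simplifying assumptions (flat ground, level pinhole camera): once a person's foot is required to touch the ground plane, their depth is no longer ambiguous. The function name and parameters below are illustrative, not from the paper.

```python
def depth_from_ground(foot_pixel, f, c, cam_height):
    """Depth at which a back-projected foot pixel meets a flat ground plane.

    Assumes a pinhole camera at height `cam_height` above the ground, with
    the optical axis parallel to the ground, focal length `f` (in pixels)
    and principal point `c`. By similar triangles, a foot observed
    `v - c_y` pixels below the principal point lies at depth
    z = f * cam_height / (v - c_y), resolving the scale-depth ambiguity.
    """
    v = foot_pixel[1]
    if v <= c[1]:
        raise ValueError("foot must project below the horizon")
    return f * cam_height / (v - c[1])

# A foot 150 px below the principal point, f = 1000 px, camera 1.5 m high:
z = depth_from_ground((320.0, 390.0), f=1000.0, c=(320.0, 240.0), cam_height=1.5)
# z = 10.0 (meters)
```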
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
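A rough sketch of the upcycling recipe described in the abstract above (not the authors' code; names like `upcycle_ffn` are invented): each expert in the new Mixture-of-Experts layer starts as an exact copy of the dense checkpoint's feed-forward weights, so the upcycled model initially computes the same function as the dense one, and a freshly initialized router is added on top.

```python
import numpy as np

def upcycle_ffn(dense_w_in, dense_w_out, num_experts, rng):
    """Build a Mixture-of-Experts layer from one dense FFN block.

    Each expert is initialized as an exact copy of the dense weights, so
    the upcycled layer matches the dense checkpoint at step zero; the
    router is initialized near zero so experts are used roughly evenly.
    """
    experts = [
        {"w_in": dense_w_in.copy(), "w_out": dense_w_out.copy()}
        for _ in range(num_experts)
    ]
    d_model = dense_w_in.shape[0]
    router = rng.normal(scale=1e-3, size=(d_model, num_experts))
    return {"experts": experts, "router": router}

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
w_in = rng.normal(size=(d_model, d_ff))
w_out = rng.normal(size=(d_ff, d_model))
moe = upcycle_ffn(w_in, w_out, num_experts=4, rng=rng)

# Every expert matches the dense checkpoint at initialization.
assert all(np.array_equal(e["w_in"], w_in) for e in moe["experts"])
```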
Most benchmarks for studying surgical interventions focus on a specific challenge instead of leveraging the intrinsic complementarity among different tasks. In this work, we present a new experimental framework towards holistic surgical scene understanding. First, we introduce the Phase, Step, Instrument, and Atomic Visual Action recognition (PSI-AVA) Dataset. PSI-AVA includes annotations for both long-term (Phase and Step recognition) and short-term reasoning (Instrument detection and novel Atomic Action recognition) in robot-assisted radical prostatectomy videos. Second, we present Transformers for Action, Phase, Instrument, and steps Recognition (TAPIR) as a strong baseline for surgical scene understanding. TAPIR leverages our dataset's multi-level annotations as it benefits from the learned representation on the instrument detection task to improve its classification capacity. Our experimental results in both PSI-AVA and other publicly available databases demonstrate the adequacy of our framework to spur future research on holistic surgical scene understanding.
Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is the fact that these deep neural networks cannot be easily evaluated for robustness issues with respect to specific scene variations. For example, it is hard to study the robustness of these networks to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations of sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers, with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
Spectral methods provide consistent estimators for community detection in dense graphs. However, their performance deteriorates as the graphs become sparser. In this work we consider a random graph model that can produce graphs at different levels of sparsity, and we show that graph neural networks can outperform spectral methods on sparse graphs. We illustrate the results with numerical examples in both synthetic and real graphs.
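As a toy illustration of the spectral baseline mentioned above (not the paper's model or graph generator), the sketch below samples a two-block stochastic block model in a dense regime and recovers the communities from the sign of the adjacency matrix's second eigenvector; in this regime spectral partitioning is nearly exact, which is the setting where the abstract says spectral methods are consistent.

```python
import numpy as np

def sbm(n, p_in, p_out, rng):
    """Two-block stochastic block model adjacency matrix (n even)."""
    labels = np.array([0] * (n // 2) + [1] * (n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = rng.random((n, n)) < probs
    adj = np.triu(upper, 1)          # keep strict upper triangle
    return (adj + adj.T).astype(float), labels

def spectral_partition(adj):
    """Partition by the sign of the adjacency matrix's 2nd eigenvector."""
    vals, vecs = np.linalg.eigh(adj)
    # eigh returns eigenvalues in ascending order; column -2 pairs with
    # the second-largest eigenvalue, which carries the block structure.
    return (vecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(1)
adj, labels = sbm(200, p_in=0.30, p_out=0.05, rng=rng)
pred = spectral_partition(adj)
# Account for label switching: either assignment of cluster ids is valid.
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```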
Minimally invasive surgery is highly operator-dependent, and its lengthy procedure times cause fatigue and risk to patients. To mitigate these risks, real-time systems can help surgeons navigate and track tools by providing a clear understanding of the scene and avoiding misestimations during the operation. Although several efforts have been made in this direction, the lack of diverse datasets, together with highly dynamic scenes and their variability across patients, remain significant obstacles to achieving robust systems. In this work, we present a systematic review of recent machine learning-based approaches to surgical tool localization, segmentation, tracking, and 3D scene perception. Furthermore, we identify current gaps and future directions for these methods and provide the rationale behind their clinical integration.
Large text-to-image models have achieved a remarkable leap in the evolution of AI, enabling high-quality and diverse synthesis of images from a given text prompt. However, these models lack the ability to mimic the appearance of subjects in a given reference set and to synthesize novel renditions of them in different contexts. In this work, we present a new approach for "personalizing" text-to-image diffusion models (specializing them to a user's needs). Given just a few images of a subject as input, we fine-tune a pretrained text-to-image model (Imagen, although our method is not limited to a specific model) so that it learns to bind a unique identifier to that specific subject. Once the subject is embedded in the model's output domain, the unique identifier can then be used to synthesize fully novel photorealistic images of the subject contextualized in different scenes. By leveraging the semantic prior embedded in the model together with a new autogenous class-specific prior-preservation loss, our technique enables synthesizing the subject in diverse scenes, poses, views, and lighting conditions that do not appear in the reference images. We apply our technique to several previously intractable tasks, including subject recontextualization, text-guided view synthesis, appearance modification, and artistic rendering (all while preserving the subject's key features). Project page: https://dreambooth.github.io/
Legal judgment prediction is one of the most popular fields at the intersection of NLP, AI, and law. By legal prediction we mean intelligent systems capable of predicting specific judicial characteristics, such as the judicial outcome or judicial class, for a given case. In this study, we used AI classifiers to predict judicial outcomes in the Brazilian legal system. To this end, we developed a text crawler to extract data from the official Brazilian electronic legal systems. These texts form a dataset of second-degree murder and active corruption cases. We applied different classifiers, such as support vector machines and neural networks, to predict judicial outcomes by analyzing the textual features of the dataset. Our research showed that regression trees, gated recurrent units, and hierarchical attention networks yielded the highest metrics on different subsets. As a final goal, we explored the weights of one of the algorithms, the hierarchical attention network, to find samples of the most important words used to absolve or convict defendants.
When dealing with real-world optimization problems, decision-makers usually face high levels of uncertainty associated with partial information, unknown parameters, or complex relationships between these and the problem's decision variables. In this work, we develop a novel chance-constrained learning (CCL) methodology, focused on mixed-integer linear optimization problems, that combines ideas from the chance-constrained and constraint-learning literatures. Chance constraints set a probabilistic confidence level for a single constraint, or a set of constraints, to hold, whereas constraint-learning methods aim to model the functional relationships between the problem's variables through predictive models. One of the main issues arises when we need to set further bounds on their response variables: the realizations of these variables are directly related to the accuracy of the predictive model and its probabilistic behavior. In this sense, CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables, providing a data-driven solution for chance constraints. Open-access software has been developed for use by practitioners. Furthermore, the benefits of CCL have been tested in two real-world case studies, demonstrating how robustness is added to optimal solutions when probabilistic bounds are set for the learned constraints.
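The abstract's core idea of turning a predictive model into a probabilistic bound can be illustrated in miniature (this is not the paper's method or software; the function below is a hypothetical toy): fit a linear model to the learned variable and shift its intercept by the empirical tau-quantile of the residuals, so that roughly a fraction tau of observations satisfy y <= bound(x), mimicking a data-driven right-hand side for a chance constraint.

```python
import numpy as np

def conditional_quantile_bound(X, y, tau):
    """Data-driven upper bound for a learned constraint variable.

    Fits a linear model to E[y | x] by least squares, then shifts the
    intercept by the empirical tau-quantile of the residuals, so that
    about a fraction tau of observations fall below the bound, i.e.
    an empirical surrogate for P(y <= bound(x)) >= tau.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    shift = np.quantile(y - Xb @ w, tau)
    w_q = w.copy()
    w_q[-1] += shift
    return w_q

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.2, size=500)
w_q = conditional_quantile_bound(X, y, tau=0.9)
Xb = np.hstack([X, np.ones((500, 1))])
coverage = np.mean(y <= Xb @ w_q)  # close to 0.9 by construction
```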
Image captioning is a current research task whose goal is to describe the content of an image using the objects in the scene and their relationships. To tackle this task, two important research areas are combined: computer vision and natural language processing. In image captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or badly) a method performs. In recent years, it has been observed that classical n-gram-based metrics are insufficient to capture the semantics and the critical meaning needed to describe the content of an image. Seeking to measure how well or poorly the set of current and more recent metrics is doing, in this manuscript we present an evaluation of several kinds of image captioning metrics and a comparison between them, using the well-known COCO dataset. To this end, we designed two scenarios: 1) a set of artificially built captions, and 2) a comparison of some state-of-the-art image captioning methods. We tried to answer the following questions: Are the current metrics helping to produce high-quality captions? How do the actual metrics compare to one another? What do the metrics really measure?
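The abstract's claim that n-gram metrics miss semantics can be demonstrated with a minimal BLEU-style modified n-gram precision (a generic textbook construction, not any of the specific metrics evaluated in the paper): a caption with the right words in the wrong roles can score perfectly, while a correct paraphrase scores lower.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """BLEU-style modified n-gram precision with clipped counts:
    the fraction of candidate n-grams that also occur in the reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

reference = "a man rides a brown horse"
paraphrase = "a man is riding a horse"   # correct meaning, new words
scrambled = "a horse rides a brown man"  # same words, wrong meaning

print(ngram_precision(scrambled, reference))   # 1.0 despite the wrong meaning
print(ngram_precision(paraphrase, reference))  # lower, despite the right meaning
```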